Evaluating Graph Signal Processing for Neuroimaging Through Classification and Dimensionality Reduction
Graph Signal Processing (GSP) is a promising framework for analyzing
multi-dimensional neuroimaging datasets while taking into account both the
spatial and functional dependencies between brain signals. In the present work,
we apply dimensionality reduction techniques based on graph representations of
the brain to decode brain activity from real and simulated fMRI datasets. We
introduce seven graphs obtained from a) geometric structure and/or b)
functional connectivity between brain areas at rest, and compare them when
performing dimension reduction for classification. We show that mixed graphs
using both a) and b) offer the best performance. We also show that graph
sampling methods perform better than classical dimension reduction including
Principal Component Analysis (PCA) and Independent Component Analysis (ICA).
Comment: 5 pages, GlobalSIP 201
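As a rough illustration of the kind of pipeline this abstract describes, the sketch below builds a toy mixed graph (a synthetic geometric adjacency plus a synthetic functional-connectivity adjacency, both stand-ins for the real brain graphs), uses the Laplacian eigenvectors as a graph Fourier basis, and projects signals onto the lowest graph frequencies as reduced features. The graph, signals, and dimensions are illustrative assumptions, not the paper's actual fMRI setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "brain" graph on n regions: a geometric part (chain of neighbours)
# mixed with a functional part (correlations of random resting signals).
n = 20
A_geo = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
rest = rng.standard_normal((n, 50))
A_fun = np.abs(np.corrcoef(rest))
np.fill_diagonal(A_fun, 0)
A = 0.5 * A_geo + 0.5 * A_fun          # mixed graph

# Graph Fourier basis: eigenvectors of the combinatorial Laplacian.
L = np.diag(A.sum(1)) - A
eigval, eigvec = np.linalg.eigh(L)

# Dimension reduction: keep the k lowest graph frequencies and project
# each signal (one value per region) onto them.
k = 5
signals = rng.standard_normal((100, n))   # 100 synthetic samples
features = signals @ eigvec[:, :k]        # (100, k) reduced features
print(features.shape)                      # (100, 5)
```

The reduced features can then be fed to any standard classifier; the paper additionally compares such graph-based reductions against PCA and ICA.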
Graph reconstruction from the observation of diffused signals
Signal processing on graphs has received a lot of attention in recent
years. Many techniques have arisen, inspired by classical signal processing
ones, to allow the study of signals on any kind of graph. A common aspect of
these techniques is that they require a graph that correctly models the studied support
to explain the signals that are observed on it. However, in many cases, such a
graph is unavailable or has no real physical existence. An example of this
latter case is a set of sensors randomly thrown in a field which obviously
observe related information. To study such signals, there is no intuitive
choice for a support graph. In this document, we address the problem of
inferring a graph structure from the observation of signals, under the
assumption that they result from the diffusion of initially i.i.d. signals.
To validate our approach, we design an experimental protocol, in which we
diffuse signals on a known graph. Then, we forget the graph, and show that we
are able to retrieve it very precisely from knowledge of the diffused signals
alone.
Comment: Allerton 2015: 53rd Annual Allerton Conference on Communication,
Control and Computing, September 30 - October 2, 2015, Allerton, United
States, 201
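The experimental protocol described above can be sketched as follows: diffuse i.i.d. white signals on a known graph, then observe that the sample covariance of the diffused signals converges to a matrix sharing its eigenvectors with the Laplacian, which is what makes recovery possible. The ring graph, diffusion kernel, and sample size are illustrative assumptions, not the paper's exact experiments.

```python
import numpy as np

rng = np.random.default_rng(1)

# Known graph: ring of n nodes and its combinatorial Laplacian.
n = 10
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
L = np.diag(A.sum(1)) - A

# Diffuse i.i.d. white signals through the heat kernel exp(-tau * L).
lam, V = np.linalg.eigh(L)
H = V @ np.diag(np.exp(-0.5 * lam)) @ V.T
X = H @ rng.standard_normal((n, 100000))   # diffused observations

# The covariance of the diffused signals is H @ H.T; estimating it from
# samples recovers the graph's spectral structure up to estimation noise.
C_hat = X @ X.T / X.shape[1]
err = np.abs(C_hat - H @ H.T).max()
print(err)    # shrinks as the number of observations grows
```

Forgetting the graph, one can then search for a Laplacian diagonalized by the estimated covariance eigenvectors, which is the recovery step the paper develops.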
Characterization and Inference of Graph Diffusion Processes from Observations of Stationary Signals
Many tools from the field of graph signal processing exploit knowledge of the
underlying graph's structure (e.g., as encoded in the Laplacian matrix) to
process signals on the graph. Therefore, when no graph is available, these
tools cannot be used. Researchers
have proposed approaches to infer a graph topology from observations of signals
on its nodes. Since the problem is ill-posed, these approaches make
assumptions, such as smoothness of the signals on the graph, or sparsity
priors. In this paper, we propose a characterization of the space of valid
graphs, in the sense that they can explain stationary signals. To simplify the
exposition, we focus on the case where signals were i.i.d.
at some point back in time and were observed after diffusion on a graph. We
show that the set of graphs verifying this assumption has a strong connection
with the eigenvectors of the covariance matrix, and forms a convex set. Along
with a theoretical study in which these eigenvectors are assumed to be known,
we consider the practical case when the observations are noisy, and
experimentally observe how fast the set of valid graphs converges to the set
obtained when the exact eigenvectors are known, as the number of observations
grows. To illustrate how this characterization can be used for graph recovery,
we present two methods for selecting a particular point in this set under
chosen criteria, namely graph simplicity and sparsity. Additionally, we
introduce a measure to evaluate how much a graph is adapted to signals under a
stationarity assumption. Finally, we evaluate how state-of-the-art methods
relate to this framework through experiments on a dataset of temperatures.
Comment: Submitted to IEEE Transactions on Signal and Information Processing
over Networks
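The core characterization can be demonstrated numerically: a graph can explain stationary signals exactly when its Laplacian commutes with the signals' covariance (equivalently, shares its eigenvectors), and the resulting set of valid graphs is convex. The random graph and the particular filter below are illustrative assumptions; any graph filter h(L) would do.

```python
import numpy as np

rng = np.random.default_rng(2)

# A random graph and its Laplacian.
n = 8
A = (rng.random((n, n)) < 0.4).astype(float)
A = np.triu(A, 1)
A = A + A.T
L = np.diag(A.sum(1)) - A

# Stationary signals: white noise passed through a graph filter of L
# (here h(L) = (I + L)^-1 as an example choice).
H = np.linalg.inv(np.eye(n) + L)
C = H @ H.T        # exact covariance of the filtered signals

# Characterization: the true Laplacian commutes with the covariance.
assert np.allclose(C @ L, L @ C)

# The set {V diag(mu) V.T : mu >= 0}, with V the covariance eigenvectors,
# is convex: any convex combination of two members still commutes with C.
lam, V = np.linalg.eigh(C)
mu1, mu2 = rng.random(n), rng.random(n)
L1 = V @ np.diag(mu1) @ V.T
L2 = V @ np.diag(mu2) @ V.T
L_mix = 0.3 * L1 + 0.7 * L2
print(np.allclose(C @ L_mix, L_mix @ C))
```

Selecting one point of this convex set under a simplicity or sparsity criterion is the graph-recovery step the paper then formalizes.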
Toward a Characterization of the Uncertainty Curve for Graphs Carrying Signals
Graph signal processing is a recent field that aims to generalize the classical tools of signal processing in order to analyze signals evolving on complex domains. These domains are represented by graphs, for which one can compute a matrix called the normalized Laplacian. It has been shown that the eigenvalues of this Laplacian correspond to the frequencies of the Fourier domain in classical signal processing. Thus, the frequency domain is not the same for every graph supporting signals. One consequence is that there is no non-trivial generalization of the Heisenberg uncertainty principle, which states that a signal cannot be localized in both the time and the frequency domains. One way to generalize this principle, introduced by Agaskar & Lu, is to determine a curve serving as a lower bound on the trade-off between precision in the graph domain and precision in the spectral domain. The goal of this paper is to propose a characterization of the signals reaching this curve, for a class of graphs more generic than the one studied by Agaskar & Lu.
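The uncertainty curve discussed here can be traced numerically with the construction of Agaskar & Lu: for each value of a trade-off parameter tau, the smallest eigenvector of L - tau * P^2 (with P^2 the diagonal matrix of squared distances to a reference node) yields one point of the lower boundary of the (graph spread, spectral spread) region. The path graph, reference node, and tau range below are illustrative choices.

```python
import numpy as np

# Normalized Laplacian of a path graph on n nodes.
n = 12
A = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
d = A.sum(1)
Lnorm = np.eye(n) - A / np.sqrt(np.outer(d, d))
P2 = np.diag(np.arange(n) ** 2.0)   # diag(d(u0, v)^2) with u0 = node 0

# Each tau gives one boundary point: the minimizer of s^T (L - tau P^2) s
# over unit-norm s is the smallest eigenvector of that matrix.
points = []
for tau in np.linspace(-2.0, 0.2, 60):
    w, U = np.linalg.eigh(Lnorm - tau * P2)
    s = U[:, 0]
    points.append((s @ P2 @ s, s @ Lnorm @ s))
points = np.array(points)
print(points.shape)
```

Plotting the resulting points traces the uncertainty curve; the paper characterizes which signals attain it for a broader class of graphs.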
Toward An Uncertainty Principle For Weighted Graphs
The uncertainty principle states that a signal cannot be localized both in time and frequency. With the aim of extending this result to signals on graphs, Agaskar & Lu introduce notions of graph and spectral spreads. They show that a graph uncertainty principle holds for some families of unweighted graphs. This principle states that a signal cannot be simultaneously localized in both the graph and spectral domains. In this paper, we aim to extend their work to weighted graphs. We show that a naive extension of their definitions leads to inconsistent results, such as discontinuity of the graph spread when regarded as a function of the graph structure. To circumvent this problem, we propose another definition of graph spread that relies on an inverse similarity matrix. We also discuss the choice of the distance function that appears in this definition. Finally, we compute and plot uncertainty curves for families of weighted graphs.
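The two spreads at stake can be made concrete with Agaskar & Lu's original (unweighted-graph) definitions, which this paper extends: the spectral spread is the Rayleigh quotient of the normalized Laplacian, and the graph spread weights the signal's energy by squared geodesic distance to a reference node. The path graph and test signals below are illustrative, and this is the unweighted baseline, not the paper's new weighted-graph definition.

```python
import numpy as np

# Path graph on n nodes: adjacency, degrees, normalized Laplacian.
n = 16
A = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
d = A.sum(1)
Lnorm = np.eye(n) - A / np.sqrt(np.outer(d, d))

# Geodesic distances from a reference node u0 (on a path: |v - u0|).
u0 = 0
dist = np.abs(np.arange(n) - u0)

def spreads(s):
    """Graph spread and spectral spread of a signal, per Agaskar & Lu."""
    s = s / np.linalg.norm(s)
    graph_spread = float(np.sum(dist ** 2 * s ** 2))
    spectral_spread = float(s @ Lnorm @ s)
    return graph_spread, spectral_spread

# A delta at u0 is perfectly localized on the graph but spectrally spread;
# a smooth decaying signal trades graph spread for spectral concentration.
delta = np.eye(n)[u0]
smooth = np.exp(-0.3 * dist)
print(spreads(delta), spreads(smooth))
```

The discontinuity issue the abstract mentions arises when one plugs weighted-graph distances into this graph spread naively, which motivates the inverse-similarity-based definition.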
A Strong and Simple Deep Learning Baseline for BCI MI Decoding
We propose EEG-SimpleConv, a straightforward 1D convolutional neural network
for Motor Imagery decoding in BCI. Our main motivation is to propose a very
simple baseline to compare to, using only very standard ingredients from the
literature. We evaluate its performance on four EEG Motor Imagery datasets,
including simulated online setups, and compare it to recent Deep Learning and
Machine Learning approaches. EEG-SimpleConv is at least as good as, or far
more efficient than, other approaches, and shows strong knowledge-transfer
capabilities across subjects while maintaining a low inference time. We
advocate that using off-the-shelf ingredients rather than coming up with
ad-hoc solutions can significantly help the adoption of Deep Learning
approaches for BCI. We make the code of the models and the experiments
accessible.
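To make the "very standard ingredients" concrete, the sketch below runs a forward pass of a minimal 1D convolutional pipeline on a toy EEG trial: one 1D convolution across electrodes, ReLU, global average pooling, and a linear classifier. This is an illustrative skeleton in plain numpy, not EEG-SimpleConv's actual architecture (whose layer counts and hyperparameters are in the paper and released code); electrode/class counts are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, w, b):
    """Valid 1D convolution: x (C_in, T), w (C_out, C_in, K), b (C_out,)."""
    c_out, c_in, k = w.shape
    t_out = x.shape[1] - k + 1
    y = np.empty((c_out, t_out))
    for t in range(t_out):
        y[:, t] = np.tensordot(w, x[:, t:t + k], axes=([1, 2], [0, 1])) + b
    return y

# Toy EEG trial: 22 electrodes, 256 time samples.
x = rng.standard_normal((22, 256))

# Minimal pipeline: 1D conv -> ReLU -> global average pooling -> linear.
w1 = rng.standard_normal((16, 22, 9)) * 0.05
b1 = np.zeros(16)
h = np.maximum(conv1d(x, w1, b1), 0)         # (16, 248) feature maps
pooled = h.mean(axis=1)                       # (16,) trial descriptor
W_cls = rng.standard_normal((4, 16)) * 0.1    # 4 motor-imagery classes
logits = W_cls @ pooled
print(logits.shape)                           # (4,)
```

In practice such a model would be trained end-to-end with cross-entropy; the point of the baseline is that nothing beyond these standard blocks is needed.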
Spatial Graph Signal Interpolation with an Application for Merging BCI Datasets with Various Dimensionalities
BCI Motor Imagery datasets are usually small and have different electrode
setups. When training a Deep Neural Network, one may want to capitalize on all
these datasets to increase the amount of data available and hence obtain good
generalization results. To this end, we introduce a spatial graph signal
interpolation technique that allows efficient interpolation of multiple
electrodes. We conduct a set of experiments with five BCI Motor Imagery
datasets comparing the proposed interpolation with spherical splines
interpolation. We believe that this work provides novel ideas on how to
leverage graphs to interpolate electrodes and on how to homogenize multiple
datasets.
Comment: Submitted to the 2023 IEEE International Conference on Acoustics,
Speech, and Signal Processing (ICASSP 2023)
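One standard way to realize graph-based electrode interpolation, in the spirit of this abstract, is harmonic interpolation: build a graph over electrode positions and choose the missing values that minimize the signal's graph smoothness s^T L s given the known values, which reduces to one linear solve. The electrode graph, Gaussian weights, and known/missing split below are illustrative assumptions; the paper's exact interpolation scheme may differ.

```python
import numpy as np

rng = np.random.default_rng(3)

# Electrode graph: Gaussian weights from pairwise distances of random 3D
# positions (a stand-in for a real montage's coordinates).
n = 12
pos = rng.standard_normal((n, 3))
D2 = ((pos[:, None, :] - pos[None, :, :]) ** 2).sum(-1)
W = np.exp(-D2)
np.fill_diagonal(W, 0)
L = np.diag(W.sum(1)) - W

# Split electrodes into known and missing sets.
known = np.arange(8)
miss = np.arange(8, n)
s = rng.standard_normal(n)   # full signal (arbitrary ground truth here)

# Harmonic interpolation: minimize s^T L s over the missing entries,
# which gives s_m = -L_mm^{-1} L_mk s_k.
s_hat = np.linalg.solve(L[np.ix_(miss, miss)],
                        -L[np.ix_(miss, known)] @ s[known])
print(s_hat)
```

Solving the small linear system once per trial is what makes the approach cheap enough to homogenize several datasets with different montages.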
A Statistical Model for Predicting Generalization in Few-Shot Classification
The estimation of the generalization error of classifiers often relies on a
validation set. Such a set is rarely available in few-shot learning scenarios,
a largely overlooked shortcoming in the field. In these scenarios, it is common
to rely on features extracted from pre-trained neural networks combined with
distance-based classifiers such as nearest class mean. In this work, we
introduce a Gaussian model of the feature distribution. By estimating the
parameters of this model, we are able to predict the generalization error on
new classification tasks with few samples. We observe that accurate distance
estimates between class-conditional densities are the key to accurate estimates
of the generalization performance. Therefore, we propose an unbiased estimator
for these distances and integrate it in our numerical analysis. We show that
our approach outperforms alternatives such as the leave-one-out
cross-validation strategy in few-shot settings.
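The debiasing step at the heart of this approach can be illustrated directly: the plug-in estimate of the squared distance between class means is biased upward by tr(Sigma_1)/n_1 + tr(Sigma_2)/n_2, and subtracting sample-based trace terms removes that bias. The Gaussian classes below are synthetic stand-ins for backbone features; dimensions and sample counts are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

# Two Gaussian classes in feature space, with few samples per class.
d, n1, n2 = 64, 5, 5
mu1, mu2 = np.zeros(d), np.full(d, 0.5)
X1 = mu1 + rng.standard_normal((n1, d))
X2 = mu2 + rng.standard_normal((n2, d))
m1, m2 = X1.mean(0), X2.mean(0)

# Naive plug-in estimate of ||mu1 - mu2||^2 overshoots by
# tr(Sigma1)/n1 + tr(Sigma2)/n2; subtract sample traces to debias it.
naive = np.sum((m1 - m2) ** 2)
tr1 = np.trace(np.cov(X1, rowvar=False))
tr2 = np.trace(np.cov(X2, rowvar=False))
unbiased = naive - tr1 / n1 - tr2 / n2

true_sq = np.sum((mu1 - mu2) ** 2)   # 16.0 for these chosen means
print(naive, unbiased, true_sq)
```

Accurate distance estimates of this kind are then plugged into the Gaussian model to predict the generalization error of a distance-based classifier without a validation set.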